Section: New Results

Spike trains statistics

Decoding the retina with the first wave of spikes

Participants : John Barrett [Institute of Neuroscience, Medical School, Newcastle University, Newcastle UK] , Pierre Kornprobst, Geoffrey Portelli, Evelyne Sernagor [Institute of Neuroscience, Medical School, Newcastle University, Newcastle UK] .

Understanding how the retina encodes visual information remains an open question. Using MEAs on salamander retinas, [60] showed that the relative latencies between some neuron pairs carry sufficient information to identify the phase of square-wave gratings. Using gratings of varying phase, spatial frequency, and contrast on mouse retinas, we extended this idea by systematically considering the relative order of all spike latencies, i.e. the shape of the first wave of spikes after stimulus onset. The discrimination task was to identify the phase among gratings of identical spatial frequency. We compared the performance (fraction of correct predictions) of our approach under classical Bayesian and LDA decoders to that of the spike count and of the response latency of each recorded neuron. Best results were obtained for the lowest spatial frequency. There, the spike count gave higher discrimination performance than the latency under both the Bayesian (0.95±0.02 and 0.75±0.11, respectively) and LDA (0.95±0.01 and 0.62±0.03, respectively) decoders. The first wave of spikes decoder (0.46±0.06) is less efficient than the spike count; nevertheless, it accounts for about 50% of the overall performance. Interestingly, these results tend to confirm the rank order coding hypothesis [59], which we are currently investigating further.

This work has been presented in [45] .
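
To make the kind of comparison described above concrete, the following Python sketch (illustrative only, run on synthetic data, and not the authors' actual pipeline) contrasts a Gaussian naive-Bayes decoder and an LDA decoder applied to spike-count, latency, and first-spike rank-order features; all tuning curves, noise levels, and parameter values are made up.

# Illustrative sketch (not the authors' code): compare decoders on synthetic
# responses to gratings of different phases, using spike counts, latencies,
# and the rank order of first spikes as alternative feature sets.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB  # stands in for the "Bayesian" decoder
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons, n_phases = 200, 30, 4
phases = rng.integers(0, n_phases, n_trials)            # stimulus label per trial

# Hypothetical tuning: each neuron's mean count and mean latency depend on phase.
count_tuning = rng.uniform(2, 20, (n_phases, n_neurons))
latency_tuning = rng.uniform(20, 120, (n_phases, n_neurons))     # in ms

counts = rng.poisson(count_tuning[phases]).astype(float)         # spike-count features
latencies = latency_tuning[phases] + rng.normal(0, 15, (n_trials, n_neurons))
ranks = np.argsort(np.argsort(latencies, axis=1), axis=1)        # first-wave rank order

for name, X in [("spike count", counts), ("latency", latencies), ("rank order", ranks)]:
    for decoder in (GaussianNB(), LinearDiscriminantAnalysis()):
        acc = cross_val_score(decoder, X, phases, cv=5).mean()
        print(f"{name:12s} {decoder.__class__.__name__:28s} {acc:.2f}")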

Spike train statistics from empirical facts to theory: the case of the retina

Participants : Bruno Cessac, Adrian Palacios [CINV-Centro Interdisciplinario de Neurociencia de Valparaiso, Universidad de Valparaiso] .

This work focuses on methods from statistical physics and probability theory for the analysis of spike trains in neural networks. Taking the retina as an example, we present recent work attempting to understand how retinal ganglion cells encode the information transmitted to the visual cortex via the optic nerve, by analyzing their spike train statistics. We compare the maximum entropy models used in the retina spike train analysis literature to rigorous results establishing the exact form of spike train statistics in conductance-based Integrate-and-Fire neural networks. This work has been published in Mathematical Problems in Computational Biology and Biomedicine, F. Cazals and P. Kornprobst (Eds.), Springer [29].

Hearing the Maximum Entropy Potential of neuronal networks

Participants : Bruno Cessac, Rodrigo Cofré.

We consider a spike-generating stationary Markov process whose transition probabilities are known. We show that there is a canonical potential whose Gibbs distribution, obtained from the Maximum Entropy Principle (MaxEnt), is the equilibrium distribution of this process. We provide a method to compute this potential explicitly and exactly, as a linear combination of spatio-temporal interactions. The method is based on the Hammersley-Clifford decomposition and on periodic orbit sampling. As an application, we establish an explicit correspondence between the parameters of the Ising model and the parameters of Markovian models such as the Generalized Linear Model. This work has been presented in several conferences [39], [27], and submitted to Phys. Rev. Letters [41]; see also the research report [31].
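
The following toy Python sketch (an illustration under simplifying assumptions, not the paper's method) shows the basic object at stake: for a small Markov chain over binary spike patterns with known transition probabilities, the potential phi(w_{t-1}, w_t) = log P(w_t | w_{t-1}) defines a Gibbs measure whose invariant marginal is the chain's stationary distribution. The Hammersley-Clifford decomposition of this potential into spatio-temporal interaction terms is not carried out here.

# Toy sketch (illustrative only): Gibbs description of a spike-generating
# Markov chain over binary spike patterns with known transition probabilities.
import itertools
import numpy as np

n_neurons = 2
patterns = list(itertools.product([0, 1], repeat=n_neurons))   # 4 spike patterns
n = len(patterns)

rng = np.random.default_rng(1)
P = rng.uniform(size=(n, n))
P /= P.sum(axis=1, keepdims=True)           # row-stochastic transition matrix

phi = np.log(P)                             # canonical (normalized) potential

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

# The probability of a block (w_0, ..., w_T) factorizes as
# pi(w_0) * exp( sum_t phi(w_{t-1}, w_t) ).
def block_probability(indices):
    p = pi[indices[0]]
    for a, b in zip(indices[:-1], indices[1:]):
        p *= np.exp(phi[a, b])
    return p

print("stationary distribution:", np.round(pi, 3))
print("P(block 0 -> 3 -> 1):", block_probability([0, 3, 1]))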

Spatio-temporal spike trains analysis for large scale networks using maximum entropy principle and Monte-Carlo method

Participants : Bruno Cessac, Olivier Marre [Institut de la Vision, Paris, France] , Hassan Nasser.

Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, one needs a model of the recorded activity that reproduces the main statistics of the data. We present a review of recent results on spike train statistics analysis using maximum entropy models (MaxEnt). Most of these studies have focused on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models where memory effects and time correlations in the dynamics are properly taken into account. We also present a new method, based on Monte-Carlo sampling, suited to fitting large-scale spatio-temporal MaxEnt models. The formalism and the tools presented will be essential to fit MaxEnt spatio-temporal models to large neural ensembles. This work has been presented in several conferences [39], [15], [44] and published in Journal of Statistical Mechanics [20].
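
As a minimal illustration of the fitting principle (a spatial, synchronous-pattern Ising sketch on surrogate data, not the spatio-temporal models or the specific Monte-Carlo scheme of the paper), the Python code below matches empirical firing rates and pairwise correlations by Metropolis sampling and gradient ascent; all sizes, learning rates, and sample counts are arbitrary.

# Minimal sketch (illustrative): fit an Ising-like MaxEnt model
# P(w) ~ exp(h.w + w'Jw) to empirical firing rates and pairwise correlations
# by Metropolis sampling and gradient ascent on the log-likelihood.
import numpy as np

rng = np.random.default_rng(2)
N, T = 5, 5000
data = (rng.random((T, N)) < 0.2).astype(float)       # surrogate spike raster

emp_mean = data.mean(axis=0)                          # empirical <w_i>
emp_corr = data.T @ data / T                          # empirical <w_i w_j>

def metropolis_sample(h, J, n_samples=5000, burn=1000):
    w = rng.integers(0, 2, N).astype(float)
    samples = []
    for t in range(n_samples + burn):
        i = rng.integers(N)
        w_new = w.copy()
        w_new[i] = 1.0 - w_new[i]                     # flip one neuron
        dE = (h @ w_new + w_new @ J @ w_new) - (h @ w + w @ J @ w)
        if np.log(rng.random()) < dE:                 # accept with prob min(1, e^dE)
            w = w_new
        if t >= burn:
            samples.append(w.copy())
    return np.array(samples)

h = np.zeros(N)
J = np.zeros((N, N))
for step in range(50):                                # gradient ascent on parameters
    S = metropolis_sample(h, J)
    h += 0.1 * (emp_mean - S.mean(axis=0))
    J += 0.1 * (emp_corr - S.T @ S / len(S))
print("fitted fields:", np.round(h, 2))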

Spike train statistics and Gibbs distributions

Participants : Bruno Cessac, Rodrigo Cofré.

We introduce Gibbs distributions in a general setting, including non-stationary dynamics, and then present three examples of such Gibbs distributions in the context of neural network spike train statistics: (i) maximum entropy models with spatio-temporal constraints; (ii) Generalized Linear Models; (iii) conductance-based Integrate-and-Fire models with chemical synapses and gap junctions. This leads us to argue that Gibbs distributions might be canonical models for spike train statistics analysis. This work has been published in J. Physiol. Paris [15].
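
As an illustration of example (ii), the Python sketch below simulates a discrete-time Bernoulli Generalized Linear Model: the conditional spiking probability given the network history is the quantity that plays the role of (the exponential of) the potential. Filters, couplings, and sizes are hypothetical and chosen only for the sake of the example.

# Illustrative sketch: discrete-time Bernoulli GLM spike generator.
import numpy as np

rng = np.random.default_rng(3)
N, T, memory = 3, 1000, 5
b = np.full(N, -2.0)                          # baseline log-odds
W = rng.normal(0, 0.3, (N, N, memory))        # coupling/history filters (made up)

spikes = np.zeros((T, N))
for t in range(memory, T):
    past = spikes[t - memory:t][::-1]         # most recent bin first
    drive = b + np.einsum('ijk,kj->i', W, past)
    p = 1.0 / (1.0 + np.exp(-drive))          # conditional probability of a spike
    spikes[t] = rng.random(N) < p

print("mean firing probability per neuron:", spikes[memory:].mean(axis=0).round(3))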

A maximum likelihood estimator of neural network synaptic weights

Participants : Bruno Cessac, Wahiba Taouali.

Given a conductance-based Integrate-and-Fire model where the dependence of the spike statistics on synaptic weights is known, can one reconstruct the network of synaptic weights from the observation of a raster plot generated by the network? We have solved this inverse problem using an explicit expression of a maximum likelihood estimator based on the Newton-Raphson method. This estimator uses analytically computed gradients and Hessian of the likelihood function, given by the product of conditional probabilities. The explicit form of these conditional probabilities can be found in [49]. Our results show that this method allows one to estimate the set of connection weights knowing the input, the noise distribution and the leak function. This work has been presented at the CNS conference in Paris, 2013 [47].
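
The Python sketch below illustrates the estimation principle on a deliberately simplified model (a logistic conditional spiking probability, not the conductance-based Integrate-and-Fire likelihood of [49]): the log-likelihood, a sum of log conditional probabilities, is maximized by Newton-Raphson using its analytic gradient and Hessian; all sizes and parameter values are invented.

# Toy sketch of the estimation principle: recover one neuron's incoming weights
# by Newton-Raphson maximization of a log-likelihood built from conditional
# spike probabilities (here a logistic model), with analytic gradient and Hessian.
import numpy as np

rng = np.random.default_rng(4)
N, T = 4, 5000
w_true = rng.normal(0, 1.0, N)

X = (rng.random((T, N)) < 0.3).astype(float)          # presynaptic raster (inputs)
p_true = 1.0 / (1.0 + np.exp(-(X @ w_true - 1.0)))
y = (rng.random(T) < p_true).astype(float)            # observed postsynaptic spikes

Xb = np.hstack([X, np.ones((T, 1))])                  # add a bias/leak-like term
w = np.zeros(N + 1)
for it in range(20):                                  # Newton-Raphson iterations
    p = 1.0 / (1.0 + np.exp(-(Xb @ w)))
    grad = Xb.T @ (y - p)                             # gradient of log-likelihood
    H = -(Xb * (p * (1 - p))[:, None]).T @ Xb         # Hessian (negative definite)
    w -= np.linalg.solve(H, grad)                     # Newton step

print("true weights:", np.round(w_true, 2))
print("estimated:   ", np.round(w[:N], 2))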